Boosting SVM Classifiers with Logistic Regression

Author

  • Yuan-chin Ivan Chang

Abstract

The support vector machine (SVM) classifier is a linear maximum-margin classifier that performs very well in many classification applications. Although it can be extended to nonlinear cases by exploiting the idea of a kernel, it may still suffer from heterogeneity in the training examples. Since there is very little theory in the literature to guide the choice of kernel functions, kernel selection is usually done by trial and error. When the training set is imbalanced, the data might not be linearly separable in the feature space defined by the chosen kernel. In this paper, we propose a hybrid method that integrates "small" support vector machine classifiers via logistic regression models. With an appropriate partition of the training set, this ensemble classifier can outperform the SVM classifier trained on the whole training set at once. With this method, we not only avoid the difficulty caused by heterogeneity but also obtain probability outputs for all examples. Moreover, the result is less ambiguous than that of classifiers combined with voting schemes. From our simulation studies and empirical results, we find that this kind of hybrid SVM classifier is robust in the following sense: (1) it improves the performance (prediction accuracy) of the SVM classifier trained on the whole training set when there is some kind of heterogeneity in the training examples; and (2) it is at least as good as the original SVM classifier when no heterogeneity is actually present in the training examples. We also apply this hybrid method to multi-class problems by replacing binary logistic regression models with polychotomous logistic regression models. Moreover, the polychotomous regression model can be constructed from individual binary logistic regression models.
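A minimal Python sketch of the hybrid scheme described above, using scikit-learn. The stratified random partition, the number of partitions k = 3, and the synthetic data are illustrative assumptions for demonstration, not the authors' prescribed procedure.

# Illustrative sketch only: partition the training set, fit one "small" SVM
# per part, then combine the SVMs' signed margins with a logistic regression
# meta-model that yields class probabilities. The stratified random partition
# and k = 3 are assumptions, not the paper's exact partitioning method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1. Partition the training set into k "small" subsets (stratified so that
#    every part contains examples of both classes).
k = 3
parts = np.empty(len(y_tr), dtype=int)
skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
for j, (_, idx) in enumerate(skf.split(X_tr, y_tr)):
    parts[idx] = j

# 2. Train one SVM per partition.
svms = [SVC(kernel="rbf").fit(X_tr[parts == j], y_tr[parts == j])
        for j in range(k)]

# 3. Stack each SVM's signed margin into a feature vector and fit a logistic
#    regression on top, giving probability outputs for every example.
def margins(X):
    return np.column_stack([m.decision_function(X) for m in svms])

meta = LogisticRegression().fit(margins(X_tr), y_tr)
print("hybrid accuracy:", meta.score(margins(X_te), y_te))
print("first 3 probabilities:", meta.predict_proba(margins(X_te))[:3])

For the multi-class extension mentioned in the abstract, the binary meta-model would be replaced by a polychotomous (multinomial) logistic regression; scikit-learn's LogisticRegression fits the multinomial model directly when given more than two classes.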

Related articles

Comparison of 14 different families of classification algorithms on 115 binary datasets

We tested 14 very different classification algorithms (random forest, gradient boosting machines, SVM with linear, polynomial, and RBF kernels, 1-hidden-layer neural nets, extreme learning machines, k-nearest neighbors and a bagging of kNN, naive Bayes, learning vector quantization, elastic-net logistic regression, sparse linear discriminant analysis, and a boosting of linear classifiers) on 115 real-life bi...

Do we need hundreds of classifiers to solve real world classification problems?

We evaluate 179 classifiers arising from 17 families (discriminant analysis, Bayesian, neural networks, support vector machines, decision trees, rule-based classifiers, boosting, bagging, stacking, random forests and other ensembles, generalized linear models, nearest neighbors, partial least squares and principal component regression, logistic and multinomial regression, multiple adaptive regre...

Unsupervised Supervised Learning II: Training Margin Based Classifiers without Labels

Many popular linear classifiers, such as logistic regression, boosting, or SVM, are trained by optimizing a margin-based risk function. Traditionally, these risk functions are computed based on a labeled dataset. We develop a novel technique for estimating such risks using only unlabeled data and the marginal label distribution. We prove that the proposed risk estimator is consistent on high-di...

An experimental evaluation of boosting methods for classification.

OBJECTIVES: In clinical medicine, the accuracy achieved by classification rules is often not sufficient to justify their use in daily practice. In order to improve classifiers it has become popular to combine single classification rules into a classification ensemble. Two popular boosting methods will be compared with classical statistical approaches. METHODS: Using data from a clinical study o...

Unsupervised Supervised Learning II: Margin-Based Classification without Labels

Many popular linear classifiers, such as logistic regression, boosting, or SVM, are trained by optimizing margin-based risk functions. Traditionally, these risk functions are computed based on a labeled dataset. We develop a novel technique for estimating such risks using only unlabeled data and knowledge of p(y). We prove that the proposed risk estimator is consistent on high-dimensional datas...

Journal:

Volume   Issue

Pages  -

Publication date: 2003